
Conversation

@zy1git
Contributor

@zy1git zy1git commented Nov 19, 2025

Summary:
Implemented _horizontal_flip_image_cvcuda and _vertical_flip_image_cvcuda kernels using cvcuda.flip operator. The kernels are automatically registered when CVCUDA is available and route cvcuda.Tensor inputs appropriately.
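For illustration, a rough sketch of what these kernels could look like (not the exact PR code): it assumes cvcuda.flip follows the OpenCV flipCode convention (1 = horizontal, 0 = vertical) and that registration goes through torchvision's internal _register_kernel_internal helper, whose import path is assumed here.

import cvcuda
from torchvision.transforms.v2 import functional as F
from torchvision.transforms.v2.functional._utils import _register_kernel_internal


@_register_kernel_internal(F.horizontal_flip, cvcuda.Tensor)
def _horizontal_flip_image_cvcuda(image: cvcuda.Tensor) -> cvcuda.Tensor:
    # flipCode=1 flips around the vertical axis (left/right swap).
    return cvcuda.flip(image, flipCode=1)


@_register_kernel_internal(F.vertical_flip, cvcuda.Tensor)
def _vertical_flip_image_cvcuda(image: cvcuda.Tensor) -> cvcuda.Tensor:
    # flipCode=0 flips around the horizontal axis (top/bottom swap).
    return cvcuda.flip(image, flipCode=0)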

Test Plan:

  • Added test_functional_cvcuda and test_image_correctness_cvcuda tests
  • Verified parity between PyTorch and CVCUDA implementations
  • All tests pass with CVCUDA backend

@pytorch-bot

pytorch-bot bot commented Nov 19, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/vision/9277

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 9 New Failures, 1 Cancelled Job, 3 Unrelated Failures

As of commit 98616f4 with merge base 617079d:

NEW FAILURES - The following jobs have failed:

CANCELLED JOB - The following job was cancelled. Please retry:

FLAKY - The following jobs failed but were likely due to flakiness present on trunk:

BROKEN TRUNK - The following job failed but was present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the cla signed label Nov 19, 2025
@zy1git zy1git force-pushed the cvcuda-flip-transforms branch from 9cb272b to 02c320a Compare November 19, 2025 23:09
@justincdavis

@zy1git What is the strategy for creating the tests for the transforms with CV-CUDA backends? Do we want to have all the tests live entirely inside the existing classes or make a new class?

The PRs I made for gaussian_blur, normalize, and to_dtype all use new classes, but I can switch them to be more centralized.

@zy1git zy1git force-pushed the cvcuda-flip-transforms branch from 02c320a to 330db00 Compare November 20, 2025 00:29
@zy1git zy1git closed this Nov 20, 2025
@zy1git zy1git reopened this Nov 20, 2025
Member

@AntoineSimoulin AntoineSimoulin left a comment


Thanks a lot for submitting this PR! This is looking good. I added some comments to make sure we have extensive test coverage :)

@NicolasHug
Member

@justincdavis replying to your question in #9277 (comment): we prefer centralizing the tests in the existing test class. The idea is that, as much as possible, we'd just add CV-CUDA as a parametrization entry with pytest.mark.parametrize to the existing tests. Antoine and I left a few comments related to that above. Does that make sense?
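For concreteness, a hedged sketch of what such a parametrization could look like; make_image, make_image_pil, make_image_cvcuda, check_functional, and CVCUDA_AVAILABLE are assumed test helpers/flags here, not necessarily the final names.

import pytest
import torchvision.transforms.v2.functional as F

# make_image, make_image_pil, make_image_cvcuda, check_functional, CVCUDA_AVAILABLE:
# assumed helpers/flags from the test suite's common utilities.


class TestHorizontalFlip:
    @pytest.mark.parametrize(
        "make_input",
        [
            make_image,
            make_image_pil,
            pytest.param(
                make_image_cvcuda,
                marks=pytest.mark.skipif(not CVCUDA_AVAILABLE, reason="CV-CUDA is not available"),
            ),
        ],
    )
    def test_functional(self, make_input):
        # The existing check runs unchanged; CV-CUDA is just one more input maker.
        check_functional(F.horizontal_flip, make_input())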

@justincdavis

@NicolasHug Makes sense! I will follow the comments you and Antoine left on this PR

@zy1git zy1git force-pushed the cvcuda-flip-transforms branch from 330db00 to 98616f4 Compare November 24, 2025 23:25
Member

@AntoineSimoulin AntoineSimoulin left a comment


Thanks a lot for addressing the initial comments! I left some final adjustments to make. Let's also make sure linting and tests are passing!

Comment on lines +1964 to +1971
if isinstance(image, cvcuda.Tensor):
    # For CVCUDA input
    expected = F.vertical_flip(F.cvcuda_to_tensor(image))
    assert_equal(F.cvcuda_to_tensor(actual), expected)
else:
    # For PIL/regular image input
    expected = F.to_image(F.vertical_flip(F.to_pil_image(image)))
    assert_equal(actual, expected)
Member

Let's:

  • remove the first call to F.cvcuda_to_tensor(image) so that the flip operation is done on the cvcuda tensor
  • simplify with a single assert_equal with:
if isinstance(image, cvcuda.Tensor):
    expected = F.cvcuda_to_tensor(F.vertical_flip(image))
else:
    expected = F.to_image(F.vertical_flip(F.to_pil_image(image)))
assert_equal(actual, expected)

Contributor Author

See this comment discussion: #9277 (comment)

Comment on lines +1314 to +1324
if isinstance(image, cvcuda.Tensor):
    # For CVCUDA input
    expected = F.horizontal_flip(F.cvcuda_to_tensor(image))
    print("actual is ", F.cvcuda_to_tensor(actual))
    print("expected is ", expected)
    assert_equal(F.cvcuda_to_tensor(actual), expected)
else:
    # For PIL/regular image input
    expected = F.to_image(F.horizontal_flip(F.to_pil_image(image)))
    assert_equal(actual, expected)
Member

Let's:

  • remove the print statement
  • remove the first call to F.cvcuda_to_tensor(image) so that the flip operation is done on the cvcuda tensor
  • simplify with a single assert_equal with:
if isinstance(image, cvcuda.Tensor):
    expected = F.cvcuda_to_tensor(F.horizontal_flip(image))
else:
    expected = F.to_image(F.horizontal_flip(F.to_pil_image(image)))
assert_equal(actual, expected)

Contributor Author

@zy1git zy1git Nov 26, 2025

Hi @AntoineSimoulin,

Thanks a lot for the comment. After taking a closer look, I think my implementation actually works well.

1. For CVCUDA input, actual = fn(image) is a cvcuda tensor, but assert_equal cannot handle cvcuda tensors, so your suggestion raises an error: "E TypeError: No comparison pair was able to handle inputs of type <class 'nvcv.Tensor'> and <class 'nvcv.Tensor'>." I therefore have to convert actual to a torch tensor and compare with assert_equal(F.cvcuda_to_tensor(actual), expected). (Have we added a cvcuda tensor comparison to assert_equal()? If so, I can make this part consistent with the PIL implementation.)

2. As for expected = F.horizontal_flip(F.cvcuda_to_tensor(image)), the logic is to convert the cvcuda tensor to a torch tensor first and then apply the flip to get the expected result, just like the PIL path F.horizontal_flip(F.to_pil_image(image)). The PIL path uses expected = F.to_image(F.horizontal_flip(F.to_pil_image(image))) because assert_equal can handle the PIL image comparison.

Please let me know if I understand correctly.
I will definitely remove the 'print' statement and the unnecessary comments.
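A minimal sketch of the conversion workaround described in point 1; the _assert_cvcuda_equal name is hypothetical, and CVCUDA_AVAILABLE and F.cvcuda_to_tensor are used as elsewhere in this PR.

def _assert_cvcuda_equal(actual, expected):
    # assert_equal has no comparison pair for nvcv.Tensor, so convert any
    # cvcuda tensor to a torch tensor before comparing.
    if CVCUDA_AVAILABLE and isinstance(actual, cvcuda.Tensor):
        actual = F.cvcuda_to_tensor(actual)
    if CVCUDA_AVAILABLE and isinstance(expected, cvcuda.Tensor):
        expected = F.cvcuda_to_tensor(expected)
    assert_equal(actual, expected)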

Comment on lines +51 to +53
if CVCUDA_AVAILABLE:
    _transformed_types = (torch.Tensor, PIL.Image.Image, cvcuda.Tensor)

Member

I feel we should maybe add cvcuda.Tensor to _transformed_types in the base class Transform under "vision/torchvision/transforms/v2/_transform.py". This way, we don't have to add it every time. Maybe something like:

# Class attribute defining transformed types. Other types are passed-through without any transformation
# We support both Types and callables that are able to do further checks on the type of the input.
_transformed_types: tuple[type | Callable[[Any], bool], ...] = (torch.Tensor, PIL.Image.Image)
if CVCUDA_AVAILABLE:
    _transformed_types += (cvcuda.Tensor,)


Regarding this, I have run into this issue on the other transform PRs. To solve it, I added an is_cvcuda_tensor function and added it to _transformed_types and to the query_size function. What is the preferred method for handling this, the tuple setup or a function?
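For reference, a hedged sketch of the callable option; is_cvcuda_tensor is a hypothetical helper, and _transformed_types is documented above as accepting callables that perform type checks.

import PIL.Image
import torch
from torch import nn

# CVCUDA_AVAILABLE and cvcuda are assumed to come from the same availability
# check used elsewhere in this PR.


def is_cvcuda_tensor(inpt) -> bool:
    # The isinstance check only runs when CV-CUDA is actually installed,
    # so the module stays importable without the dependency.
    return CVCUDA_AVAILABLE and isinstance(inpt, cvcuda.Tensor)


class Transform(nn.Module):
    # With a callable entry, the tuple can be defined unconditionally.
    _transformed_types = (torch.Tensor, PIL.Image.Image, is_cvcuda_tensor)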

def test_image_correctness(self, fn, make_input):
    image = make_input()
    actual = fn(image)
    if isinstance(image, cvcuda.Tensor):


You can dispatch based on the make_input function itself to avoid issues if CV-CUDA is not available.
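A hedged sketch of that dispatch; make_image_cvcuda is an assumed helper name, and everything else follows the test code quoted above.

def test_image_correctness(self, fn, make_input):
    image = make_input()
    actual = fn(image)
    if make_input is make_image_cvcuda:
        # Reference result is the same kernel applied to the torch copy of the input.
        expected = fn(F.cvcuda_to_tensor(image))
        assert_equal(F.cvcuda_to_tensor(actual), expected)
    else:
        expected = F.to_image(fn(F.to_pil_image(image)))
        assert_equal(actual, expected)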

